I. Executive Summary
The OGC Climate Resilience Community brings decision makers, scientists, policy makers, data providers, software developers, and service providers together. The goal is to enable everyone to take the relevant actions to address climate change and make well informed decisions for climate change adaptation.
This pilot brought together data and processing pipelines in the form of various ‘components’ from many organizations, available at different scales for large and small areas, to be integrated with scientific processes, analytical models, and simulation environments. These were all presented, along with their challenges. No single organization has all the data we need to understand the consequences of climate change. As such, the OGC Climate Resilience Pilot identified, discussed, and developed these resources further in a first attempt, thereby enabling the OGC community to start building the guidebooks and Best Practices. This pilot also experimented with new technologies to share data and information to collaboratively start addressing shared challenges.
This pilot sets the starting point for the OGC Climate Resilience Community’s vision to support efforts on climate actions and enable international partnerships (SDG 17), and move towards global interoperable open digital infrastructures providing climate resilience information on demand. In this sense, this pilot contributes to establishing an OGC climate resilience concept store for the community where all appropriate climate information to build climate resilience information systems as open infrastructures can be found in one place, be it information about data services, tools, software, handbooks, or a place to discuss experiences and needs. The concept store covers all phases of Climate Resilience, from initial hazards identification and mapping to vulnerability and risk analysis to options assessments, prioritization, and planning, and ends with implementation planning and monitoring capabilities.
Broadly speaking, this pilot attempts to answer questions such as:
What use-cases can be realized with the current data, services, analytical functions, and visualization capabilities that we have?
How much effort is it to realize these use-cases?
What is missing, or needs to be improved, in order to transfer the use-cases developed in the pilot to other areas?
II. Keywords
The following are keywords to be used by search engines and document catalogues.
Climate Resilience, data, ARD, component, use case, FAIR, Drought, Heat, Fire, Floods
III. Security considerations
No security considerations have been made for this document.
IV. Introduction
IV.A. Enhancing Interoperability for Climate Resilience Information Systems
The OGC Climate Resilience Pilot will be the first phase of multiple long-term climate activities aiming to evolve geospatial data, technologies, and other capabilities into valuable information for decision makers, scientists, policy makers, data providers, software developers, and service providers so we can make valuable, informed decisions to improve climate action. The goal is to help the location community develop more powerful visualization and communication tools to accurately address ongoing climate threats such as heat, drought, floods, and fires, as well as supporting the nationally determined contributions for greenhouse gas emission reduction. Climate resilience is often considered the use case of our lifetime, and the OGC community is uniquely positioned to accelerate solutions through collective problem solving with this initiative.
Figure 1
As illustrated, big, raw data from multiple sources requires further processing in order to be ready for analysis and climate change impact assessments. Applying data enhancement steps, such as bias adjustments, re-gridding, or calculation of climate indicators and essential variables, leads to “Decision Ready Indicators.” The spatial data infrastructures required for this integration should be designed with interoperable building blocks following FAIR data principles. Heterogeneous data from multiple sources can be enhanced, adjusted, refined, or quality controlled to provide Science Services data products for Climate Resilience. The OGC Climate Change Services Pilots will also illustrate the graphical exploration of the Decision Ready Climate Data. It will demonstrate how to design FAIR climate services information systems. The OGC Pilot demonstrators will illustrate the necessary tools and the visualisations to address climate actions moving towards climate resilience.
IV.B. The Role of the Pilot
The OGC Climate Resilience Community brings decision makers, scientists, policy makers, data providers, software developers, and service providers together. The goal is to enable everyone to take the relevant actions to address climate change and make well informed decisions for climate change adaptation. This includes scientists, decision makers, city managers, politicians, and, last but not least, every one of us. So what do we need? We need data from many organizations, available at different scales for large and small areas, to be integrated with scientific processes, analytical models, and simulation environments. We need data visualization and communication tools to shape the message in the right way for any client. Many challenges can be met through resources that adhere to FAIR principles. FAIR as in: Findable, Accessible, Interoperable, and Reusable. No single organization has all the data we need to understand the consequences of climate change. The OGC Climate Resilience Community identifies, discusses, and develops these resources. The OGC community builds the guidebooks and Best Practices, it experiments with new technologies to share data and information, and collaboratively addresses shared challenges.
The OGC Climate Resilience Community has a vision to support efforts on climate actions and enable international partnerships (SDG 17), and move towards global interoperable open digital infrastructures providing climate resilience information on demand. This pilot will contribute to establishing an OGC climate resilience concept store for the community where all appropriate climate information to build climate resilience information systems as open infrastructures can be found in one place, be it information about data services, tools, software, handbooks, or a place to discuss experiences and needs. The concept store covers all phases of Climate Resilience, from initial hazards identification and mapping to vulnerability and risk analysis to options assessments, prioritization, and planning, and ends with implementation planning and monitoring capabilities. These major challenges can only be met through the combined efforts of many OGC members across government, industry, and academia.
This Call for Participation solicits interest from organizations to join the upcoming Climate Resilience Pilot, an OGC Collaborative Solution and Innovation Program activity. This six-month Pilot sets the stage for a series of follow-up activities. It therefore focuses on use-case development, implementation, and exploration. It answers questions such as:
What use-cases can be realized with the current data, services, analytical functions, and visualization capabilities that we have?
How much effort is it to realize these use-cases?
What is missing, or needs to be improved, in order to transfer the use-cases developed in the pilot to other areas?
IV.C. Objectives
The pilot has three objectives: first, to better understand what is currently possible with the available data and technology; second, to determine what additional data and technology need to be developed in the future to better meet the needs of the Climate Resilience Community; and third, to capture Best Practices and to allow the Climate Community to copy and transform as many use-cases as possible to other locations or framework conditions.
IV.D. Background
With growing local communities, an increase in climate-driven disasters, and an increasing risk of future natural hazards, the demand for National Resilience Frameworks and Climate Resilience Information Systems (CRIS) cannot be overstated. CRIS enable data search, retrieval, fusion, processing, and visualization. They enable access, understanding, and use of federal data, facilitate integration of federal and state data with local data, and serve as local information hubs for climate resilience knowledge sharing.
CRIS already exist and are operational, such as the Copernicus Climate Change Service with its Climate Data Store. CRIS architectures can be further enhanced by providing climate scientific methods and visualization capabilities as climate building blocks. Based on FAIR principles, these building blocks in particular enable the reusability of Climate Resilience Information System features and capabilities. Reusability is an essential component when goals, expertise, and resources are aligned from the national to the local level. Framework conditions differ across the country, but building blocks enable as much reuse of existing Best Practices, tools, data, and services as possible.
Goals and objectives of decision makers vary at different scales. At the municipal level, municipal leaders and citizens directly face climate-related hazards. Aspects thus come into focus such as reducing vulnerability and risk, building resilience through local measures, or enhancing emergency response. At the state level, the municipal efforts can be coordinated and supported by providing funding and enacting relevant policies. The national, federal, or international level provides funding, science data, and international coordination to enable the best analysis and decisions at the lower scales.
Figure 2
Productivity and decision making are enhanced when climate building blocks are exchangeable across countries, organizations, or administrative levels (see Figure below). This OGC Climate Resilience Pilot is a contribution towards an open, multi-level infrastructure that integrates data spaces, open science, and local-to-international requirements and objectives. It contributes to the technology and governance stack that enables the integration of data including historical observations, real time sensing data, reanalyses, forecasts or future projections. It addresses data-to-decision pipelines, data analysis and representation, and bundles everything in climate resilience building blocks. These building blocks are complemented by Best Practices, guidelines, and cook-books that enable multi–stakeholder decision making for the good of society in a changing natural environment.
The OGC Innovation Program brings all groups together: the various members of the stakeholder group define use cases and requirements, while the technologists and data providers experiment with new tools and data products in an agile development process. The scientific community provides results in appropriate formats and enables open science by providing applications that can be parameterized and executed on demand.
Figure 3
This OGC Climate Resilience Pilot is part of the OGC Climate Community Collaborative Solution and Innovation process, an open community process that uses the OGC as the governing body for collaborative activities among all members. A spiral approach is applied to connect technology enhancements, new data products, and scientific research with community needs and framework conditions at different scales. The spiral approach defines real world use cases, identifies gaps, produces new technology and data, and tests these against the real world use cases before entering the next iteration. Evaluation and validation cycles alternate and continuously define new work tasks. These tasks include documentation and toolbox descriptions on the consumer side, and data and service offerings, interoperability, and system architecture developments on the producer side. It is emphasized that research and development is not constrained to the data provider or infrastructure side. Many tasks need to be executed on the data consumer side in parallel and then merged with advancements on the provider side in regular intervals.
Good experiences have been made using OGC API Standards in the past. For example, the remote operations on climate simulations (roocs) use OGC API Processes for subsetting data sets to reduce the data volume being transported. Other systems use the SpatioTemporal Asset Catalog (STAC) for metadata and data handling, or the OGC Earth Observation Exploitation Platform Best Practices for the deployment of climate building blocks or applications into CRIS architectures. Still, data handling for more complex climate impact assessments within FAIR and open infrastructures needs to be enhanced. There is no international recommendation or best practice on the usage of existing API standards within individual CRIS. It is the goal of this pilot to contribute to the development of such a recommendation, respecting existing operational CRIS that serve heterogeneous user groups.
Figure 4
IV.E. Technical Challenges
Realizing the delivery of Decision Ready Data on demand to achieve Climate Resilience involves a number of technical challenges that have already been identified by the community. A subset will be selected and embedded in use-cases that will be defined jointly by Pilot Sponsors and the OGC team. The goal is to ensure a clear value-enhancement pipeline as illustrated in Figure 1, above. This includes, among other elements, a baseline of standardised operators for data reduction and analytics. These need to fit into an overall workflow that provides translation services between upstream model data and downstream output — basically from raw data, to analysis-ready data, to decision-ready data. The following technical challenges have been identified and will be treated accordingly in the focus area cycles of the Pilot:
Big Data Challenge: Multiple obstacles still exist, creating big barriers for seamless information delivery, starting with Data Discovery. With the emergence of new data platforms, new processing functionalities, and thus new products, data discovery remains a challenge. In addition to existing solutions based on established metadata profiles and catalog services, new technologies such as the SpatioTemporal Asset Catalog (STAC) and open Web APIs such as OGC API Records will be explored. Furthermore, aspects of Data Access need to be solved, where the new OGC API suite of Web APIs for data access, subsetting, and processing is currently utilized very successfully in several domains. Several code sprints have shown that server-side solutions can be realized within days and clients can interact very quickly with these server endpoints, so development time is radically reduced. A promising specialized candidate for the integration of climate and non-climate data has recently been published in the form of OGC API — Environmental Data Retrieval (EDR). But which additional APIs are needed for climate data? Is the current set of OGC APIs sufficiently qualified to support the data enhancement pipeline illustrated in Figure 1? If not, what modifications and extensions need to be made available? How do OGC APIs cooperate with existing technologies such as THREDDS and OPeNDAP? Regarding the challenges of data spaces, Data Cubes have recently been explored in the OGC data cube workshop. Ad hoc creation and embedded processing functions have been identified as essential ingredients for efficient data exploration and exchange. Is it possible to transfer these concepts to all stages of the processing pipeline? How can we scale both ways, from local, ad hoc cubes to pan-continental cubes and vice versa? How can cubes be extended as part of data fusion and data integration processes?
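To make the EDR option concrete, the sketch below assembles an OGC API — Environmental Data Retrieval position query, which samples a collection at a point for named parameters over a time interval. The server endpoint and collection identifier are hypothetical placeholders, not services from this pilot.

```python
from urllib.parse import urlencode

def edr_position_url(base, collection, wkt_point, parameters, datetime_range):
    """Assemble an OGC API - EDR position query URL.

    Position queries take a WKT point in `coords`, a comma-separated
    `parameter-name` list, and an ISO 8601 interval in `datetime`.
    """
    query = urlencode({
        "coords": wkt_point,
        "parameter-name": ",".join(parameters),
        "datetime": datetime_range,
        "f": "json",
    })
    return f"{base}/collections/{collection}/position?{query}"

# Hypothetical endpoint and collection id, for illustration only.
url = edr_position_url(
    "https://example.org/edr",
    "reanalysis-era5",
    "POINT(19.94 50.06)",
    ["air_temperature", "precipitation"],
    "2020-01-01T00:00:00Z/2020-12-31T23:59:59Z",
)
print(url)
```

The same pattern extends to EDR area and cube queries by swapping the `position` path segment and the `coords` geometry.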
Cross-Discipline Data Integration: Different disciplines such as Earth Observation, the various social sciences, or climate modeling use different conceptual models in their data collection, production, and analytical processes. How can we map between these different models? What patterns have been used to transform conceptual models to logical models, and eventually physical models? The production of modern decision-ready information needs the integration of several data sets, including census and demographics, further social science data, transportation infrastructure, hydrography, land use, topography, and other data sets. This pilot cycle uses ‘location’ as the common denominator between these diverse data sets and works with several data providers and scientific disciplines. In terms of Data Exchange Formats, the challenge is to know which data formats need to be supported at the various interfaces of the processing pipeline. What is the minimum constellation of required formats to cover the majority of use cases? What role do container formats play? Data Provenance is also challenging at the technical level. Many archives include data from several production cycles, such as IPCC AR5 and AR6 models. In this context, long-term support needs to be realized, with full traceability from high-level data products back to the original raw data. Especially in the context of reliable data-based policy, clear audit trails and accountability for the data-to-information evolution need to be ensured.
Building Blocks for processing pipelines: Machine Learning and Artificial Intelligence play an increasing role in the context of data science and data integration. This focus area needs to evaluate the applicability of machine learning models in the context of the value-enhancing processing pipeline. What information needs to be provided to describe machine learning models and corresponding training data sufficiently to ensure proper usage at various steps of the pipeline? Upcoming options to deploy ML/AI within processing APIs to enhance climate services raise challenges, e.g., how to initiate or ingest training models and the appropriate learning extensions for the production phase of ML/AI. Heterogeneity in data spaces can be bridged with Linked Data and Data Semantics. Proper and common use of shared semantics is essential to guarantee solid value-enhancement processes. At the same time, resolvable links to procedures, sampling and data process protocols, and used applications will ensure transparency and traceability of decisions and actions based on data products. What level is currently supported? What infrastructure is required to support shared semantics? What governance mechanisms need to be put in place?
IV.F. How is this Pilot Relevant to the Climate Resilience Domain Working Group?
The Climate Resilience DWG will concern itself with technology and technology policy issues, focusing on geospatial information and technology interests as related to climate mitigation and adaptation as well as the means by which those issues can be appropriately factored into the OGC standards development process.
The mission of the Climate Resilience DWG is to identify geospatial interoperability issues and challenges that impede climate action, then examine ways in which those challenges can be met through application of existing OGC Standards, or through development of new geospatial interoperability standards under the auspices of OGC.
Activities to be undertaken by the Climate Resilience DWG include but are not limited to:
Identify the OGC interface standards and encodings useful to apply FAIR concepts to climate change services platforms;
Liaise with other OGC Working Groups (WGs) to drive standards evolution;
Promote the usage of the aforementioned standards with climate change service providers and policy makers addressing international, regional, and local needs;
Liaise with external groups working on technologies relevant to establishing ecosystems of EO Exploitation Platforms;
Liaise with external groups working on relevant technologies;
Publish OGC Technical Papers, Discussion Papers or Best Practices on interoperable interfaces for climate change services;
Provide software toolkits to facilitate the deployment of climate change services platforms.
Engineering Report for the OGC Climate Resilience Pilot
1. Executive Summary
Recognizing the impacts of climate change and monitoring the environment to achieve the Sustainable Development Goals requires collaborative solutions. This calls for standardized data and tools. But how can we ensure an effective exchange of reliable information across disciplines without sacrificing individual users’ needs? Climate services require vast volumes of data from different providers to be processed by various scientific ecosystems: raw data needs to be transformed into analysis ready data and, from there, into indicators that are ready to support decisions. To provide a processing infrastructure that supports collaboration, we need standards based on the principles of being findable, accessible, interoperable, and reusable. OGC Standards are aligned with these principles, allowing for the reuse of data refinement features across countries, organizations, and administrative levels.
This OGC Climate Resilience Pilot, the first phase of multiple long-term climate activities, aimed to evolve geospatial data, technologies, and other capabilities into valuable information for decision makers, scientists, policy makers, data providers, software developers, and service providers. The pilot showed how data pipelines can be established to produce dedicated information out of the massive amount of available raw data, and how raw data from multiple sources can be organised into data processing pipelines that bring them into formats ready for analysis. Different aspects of GEODataCubes are discussed to emphasize the necessity of analysis ready data and decision ready indicators. Science-related aspects of climate impact are discussed through the use cases of droughts, floods, and wildfires, where assessment tools and the complexity of climate indices are laid out.
With non-technical decision makers as the target user group, the workflow from data to visualisation is shown in several chapters of this report. A dedicated chapter points out the options and challenges of using artificial intelligence to establish a 5D meta world in which the efficiency of climate action can be simulated. The reduction of disaster risks through engineered structures such as dams can also be simulated. Climate resilience is not only a matter of shifting meteorological phenomena but is also related to land degradation and loss of biodiversity. Vegetation is therefore highlighted, with options for 3D vegetation simulation showing how different species survive under changing climate conditions. It could be shown that small-scale urban planning is supported by the data-to-visualisation application, where single tree species represent the real or simulated situation of a small-scale area. The pilot includes case studies of Los Angeles.
The pilot points out the challenges of bringing information to decision makers. A dedicated chapter focuses on communication, presenting approaches for communicating with non-technical people who are in charge of local climate resilience action strategies.
The Climate Resilience Pilot showed how to make valuable, informed decisions to improve climate action, especially by helping the location community develop more powerful visualization and communication tools to accurately address ongoing climate threats such as heat, drought, floods, and fires.
2. Contributors
| Name | Organization | Role or Summary of contribution |
|---|---|---|
| Guy Schumann | RSS-Hydro | Lead ER Editor |
| Albert Kettner | RSS-Hydro/DFO | Lead ER Editor |
| Timm Dapper | Laubwerk GmbH | |
| Zhe Fang | Wuhan University | |
| Hanwen Xu | Wuhan University | |
| Tianyu Tuo | Wuhan University | |
| Dean Hintz | Safe Software, Inc. | |
| Kailin Opaleychuk | Safe Software, Inc. | |
| Jérôme Jacovella-St-Louis | Ecere Corporation | |
| Hanna Krimm | alpS GmbH | |
| Andrew Lavender | Pixalytics Ltd | |
| Samantha Lavender | Pixalytics Ltd | Development of drought indicator |
| Jenny Cocks | Pixalytics Ltd | |
| Jakub Walawender | Walawender, Jakub P. | |
| Eugene Yu | GMU | |
| Gil Heo | GMU | |
| Glenn Laughlin | Pelagis Data Solutions | |
| Patrick Dion | Ecere | |
| Tom Landry | Intact Financial Corporation | |
| Nils Hempelmann | OGC | Climate resilience Pilot Coordinator |
2.1. About Laubwerk
Laubwerk is a software development company whose mission is to combine accurate, broadly applicable visualizations of vegetation with deeper information and utility that goes far beyond their visual appearance. We achieve this by building a database that combines ultra-realistic 3D representations of plants with extensive metadata describing plant properties. This unique combination makes Laubwerk a prime partner to bridge the gap from data-driven simulation to eye-catching visualizations.
3. Components
The various organizations and institutes that contribute to the Climate Resilience Pilot are described below. Their input to the pilot is indicated in Figure 5 below.
Figure 5 — CRIS overview
3.1. Component workflow
The figure below shows a high level workflow diagram that illustrates the interactions between data, models and the various components.
Figure 6 — High level workflow diagram that illustrates the interactions between data, models and the various components
4. Raw data to datacubes
4.1. Jakub P. Walawender
Component: Solar climate atlas for Poland.
Inputs: In situ solar radiation and sunshine duration data, satellite-based solar radiation and sunshine duration estimates (climate data records), and various geospatial data from different sources (e.g., digital elevation model, climate zones, etc.).
Outputs:
This pilot outputs: a review of available solar radiation datasets and web services; two scripts (a solar climate data exploratory analysis tool and a solar climate data preprocessing tool); and a report summarizing the results of the exploratory data analysis and quality control, including a discussion of inconsistency factors.
As the final result: a solar radiation data cube for Poland (a 40-year, high-resolution dataset for selected solar radiation variables); analysis ready data (dedicated products for different solar-smart applications in the fields of renewable energy, agriculture, spatial planning, tourism, etc.); a detailed analysis of the solar climate in Poland (incl. solar regionalisation); and an online web map service with an interactive, self-explanatory interface enabling easy on-demand information access.
What other component(s) can interact with the component: Considering the final result, this component’s work crosses all the other components, and all of them are important.
What OGC standards or formats does the component use and produce:
NetCDF compliant with the CF (Climate and Forecast) convention.
WMS, WCS, OGC API
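As one illustration of how the WMS output could be consumed, the sketch below assembles a standard WMS 1.3.0 GetMap request. The endpoint and layer name are hypothetical placeholders, not the actual atlas service.

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width, height):
    # Standard WMS 1.3.0 GetMap parameters. With EPSG:4326 in WMS 1.3.0,
    # the bbox axis order is latitude first, then longitude.
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": layer,
        "styles": "",
        "crs": "EPSG:4326",
        "bbox": ",".join(str(v) for v in bbox),
        "width": width,
        "height": height,
        "format": "image/png",
    }
    return f"{base}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only.
url = wms_getmap_url(
    "https://example.org/solar-atlas/wms",
    "global_horizontal_irradiance",
    (49.0, 14.1, 55.0, 24.2),   # roughly Poland, lat/lon order
    800, 600,
)
```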
4.2. Ecere Corporation
Ecere is providing a deployment of its GNOSIS Map Server with a focus on a Sentinel-2 Level 2A data cube. OGC API - Tiles, OGC API - Coverages, OGC API - Maps, OGC API — Discrete Global Grid Systems, Common Query Language (CQL2), and OGC API — Processes — Part 3: Workflows & Chaining are the supported standards and extensions for this task.
The plan is to use machine learning process output from the Wildland Fire Fuel Indicator Workflow to identify vegetation fuel types from Sentinel-2 bands, then combine these with weather data to assess wildfire hazard risk in Australia. The workflow will use as input the Sentinel-2 OGC API data cube from our GNOSIS Map Server.
Component: Data Cube and Wildfire vegetation fuel map / risk analysis.
Inputs: ESA Sentinel-2 L2A data (from AWS / Element 84), Temperature / Precipitation / Wind climate data, Reference data for training: vegetation fuel type classification, wildfire risk.
The Sentinel-2 Level 2A collection is provided at https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a
Outputs: OGC API (Coverage, Tiles, DGGS, Maps) for Sentinel-2 data (https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a) including full global coverage, all resolutions/scales, all bands that can be individually selected, and CQL2 expressions for band arithmetic; climate data (to be added); vegetation fuel type (possibly by end of pilot, or for DP2023); wildfire risk workflow (possibly by end of pilot, or for DP2023).
What other component(s) can interact with the component: Any OGC API client component requiring efficient access to Sentinel-2 data, clients requiring climate data once made available, clients presenting vegetation fuel type, wildfire risk (once ready, might extend into DP2023).
What OGC standards or formats does the component use and produce:
OGC API (Coverage — with subsetting, scaling, range subsetting, coverage tiles; Tiles, DGGS (GNOSISGlobalGrid and ISEA9R), Maps (incl. map tiles), Styles), CQL2, OGC API — Processes with Part 3 for workflows (Nested Local/Remote Processes, Local/Remote Collection Input, Collection Output, Input/Output Field Modifiers)
Formats: GNOSIS Map Tiles (Gridded Coverage, Vector Features, Map imagery, and more); GeoTIFF; PNG (16-bit value single channel for coverage, RGBA for maps); JPEG.
4.2.1. Overview of standards and extensions available for outputs
4.2.1.1. OGC API — DGGS
There are two main requirements classes for this standard.
Data Retrieval (What is here? — “give me the data for this zone”);
Zones Query (Where is it? — “which zones match this collection and/or my query”)
Example of data retrieval queries:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/GNOSISGlobalGrid/zones/3-4-11/data https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/ISEA9Diamonds/zones/E7-FAE/data
Figure 7
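The data-retrieval URL pattern above can be assembled programmatically. A minimal sketch, using the collection, DGGRS, and zone identifiers from the example queries:

```python
def dggs_zone_data_url(base, collection, dggrs, zone_id):
    # URL template for the OGC API - DGGS "Data Retrieval" requirements
    # class as exposed by the GNOSIS Map Server: the zone identifier
    # addresses one cell of the chosen discrete global grid.
    return f"{base}/collections/{collection}/dggs/{dggrs}/zones/{zone_id}/data"

base = "https://maps.gnosis.earth/ogcapi"
url = dggs_zone_data_url(base, "sentinel2-l2a", "GNOSISGlobalGrid", "3-4-11")
```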
Example of a zones query:
https://maps.gnosis.earth/ogcapi/collections/SRTM_ViewFinderPanorama/dggs/ISEA9Diamonds/zones https://maps.gnosis.earth/ogcapi/collections/SRTM_ViewFinderPanorama/dggs/ISEA9Diamonds/zones?f=json (as a list of compact JSON IDs)
Figure 8
Level, Row, and Column (which are encoded differently in the compact hexadecimal zone IDs) can be seen on the zone information page at:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/GNOSISGlobalGrid/zones/3-4-11 https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/ISEA9Diamonds/zones/E7-FAE
Figure 9
There are several different discrete global grids. Two are implemented in our service:
Our GNOSIS Global Grid, which is geographic rather than projected, and is axis-aligned with latitudes and longitudes, but not equal area (though it tends towards equal area — maximum variation is ~48% up to a very detailed level)
ISEA9R, which is a dual DGGS of ISEA3H even levels, using rhombuses/diamonds instead of hexagons, but much simpler to work with and can transport the hexagon area values as points on the rhombus vertices for those ISEA3H even levels. It is also axis-aligned to a CRS defined by rotating and skewing the ISEA projection.
The primary advantage of OGC API — DGGS is:
for retrieving data from DGGS that are not axis-aligned or have geometry that cannot be represented as squares (e.g., hexagons), or
for the zone query capability, most useful for specifying queries (e.g. using CQL2). The extent to which we implement Zones Query at this moment is still limited.
Examples of DGGS Zone information page:
Figure 10 — GNOSIS Map Server information resource for GNOSIS Global Grid zone 5-24-6E
Figure 11 — GNOSIS Map Server information resource for ISEA9Diamonds zone 5-24-6E
Figure 12 — GNOSIS Map Server information resource for ISEA9Diamonds zone 5-24-6E sections
4.2.1.2. OGC API — Coverages with OGC API — Tiles
Because they are axis-aligned, both of these DGGS can be described as a TileMatrixSet, and therefore equivalent functionality to the OGC API — DGGS Data Retrieval requirements class can be achieved using OGC API — Tiles and the corresponding TileMatrixSets instead.
Coverage Tile queries for the same zones:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288
Figure 13
To request a different band than the default RGB (B04, B03, B02) bands:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17?properties=B08
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288?properties=B08
Figure 14
To retrieve coverage tiles with band arithmetic to compute NDVI:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17?properties=(B08/10000-B04/10000)/(B08/10000+B04/10000)
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288?properties=(B08/10000-B04/10000)/(B08/10000+B04/10000)
Figure 15
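The same band arithmetic can of course be applied client-side to downloaded reflectance arrays. The sketch below (with a hypothetical `ndvi` helper) reproduces the server-side expression: divide the B08 (NIR) and B04 (red) digital numbers by 10000, then compute the normalized difference.

```python
import numpy as np

# Client-side equivalent of the server-side NDVI band arithmetic above:
# scale the B08 (NIR) and B04 (red) bands by 1/10000, then compute
# (NIR - red) / (NIR + red).
def ndvi(b08, b04, scale=10000.0):
    nir = np.asarray(b08, dtype=float) / scale
    red = np.asarray(b04, dtype=float) / scale
    return (nir - red) / (nir + red)

# A vegetated pixel (high NIR, low red) yields a strongly positive NDVI.
print(ndvi([8000], [1000]))
```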
4.2.1.3. OGC API — Maps with OGC API — Tiles
Map Tiles queries for the same zones:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/GNOSISGlobalGrid/3/4/17
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/ISEA9Diamonds/4/373/288
Figure 16
Figure 17 — GNOSIS Map Server Map of tiles 3/4/17 in GNOSISGlobalGrid
To retrieve a map of the Scene Classification:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/scl/map/tiles/GNOSISGlobalGrid/3/4/17
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/scl/map/tiles/ISEA9Diamonds/4/373/288
Figure 18
Figure 19 — Sentinel-2 with image classification styling
To filter out the clouds:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/GNOSISGlobalGrid/3/4/17?filter=SCL<8 or SCL>10
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/ISEA9Diamonds/4/373/288?filter=SCL<8 or SCL>10
Figure 20
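The filter expression above works because Sentinel-2 scene classification (SCL) values 8, 9, and 10 correspond to cloud medium probability, cloud high probability, and thin cirrus. A minimal client-side equivalent of that mask, on a toy SCL array:

```python
import numpy as np

# Local equivalent of the filter=SCL<8 or SCL>10 query: SCL values 8, 9,
# and 10 are the Sentinel-2 cloud classes (medium probability, high
# probability, thin cirrus), so keeping SCL<8 or SCL>10 drops cloudy
# pixels. The 2x2 array here is illustrative.
scl = np.array([[4, 8], [9, 11]])
keep = (scl < 8) | (scl > 10)
print(keep)
```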
To get an NDVI map:
https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/ndvi/map/tiles/GNOSISGlobalGrid/3/4/17 https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/ndvi/map/tiles/ISEA9Diamonds/4/373/288
Figure 21
Figure 22 — Sentinel-2 map with NDVI band arithmetic
The same filter= and properties= parameters should also work with the /coverage and /dggs endpoints; filter= additionally works with the /map endpoints.
4.3. Wuhan University (WHU)
Wuhan University (WHU) plays a significant role in researching and teaching all aspects of surveying and mapping, remote sensing, photogrammetry, and geospatial information sciences in China. In this Climate Resilience Pilot, WHU contributed two use cases: one on drought and wildfire impact, and one on analysis ready data.
Component: Data Cube and Drought Indicator.
Inputs: Climate data, including precipitation and temperature. Optical data, such as Landsat-8 and sentinel-2.
Outputs: Drought risk map and other results in the form of GeoTIFF after processing in a Data Cube.
What other component(s) can interact with the component: .
What OGC standards or formats does the component use and produce:
OGC API — Coverages to provide the data in Cube
OGC API — Processes to provide the calculation of drought indices
5. Raw data to Analysis Ready Data (ARD)
CEOS defines Analysis Ready Data as satellite data that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with a minimum of additional user effort and interoperability both through time and with other datasets. See https://ceos.org/ard/, and especially the information for data producers: https://ceos.org/ard/files/CARD4L_Info_Note_Producers_v1.0.pdf.
Several past successful OGC testbeds, including DP21 to which this pilot is linked, have looked at ARD and IRD, also in terms of use cases. In this pilot, the main technical contributions have included creating digestible OGC data types and formats for specific partner use cases, i.e., producing ARD from publicly available EO and model data, including hydrological and other types of model output as well as climate projections.
These ARD will feed into the use cases of all participants, with a particular focus on the Heat, Drought, and Health Impact use cases proposed by participants in the pilot.
Specifically, participants such as RSS-Hydro provide access to the following satellite and climate projection data:
Wildfire: Fire Radiative Power (FRP) product from Sentinel-3 (NetCDF), Sentinel-5P, MODIS products (fire detection), VIIRS (NOAA); possibly biomass availability (fire fuel)
Land Surface Temperature: Sentinel-3
Pollution: Sentinel-5P
Climate projection data (NetCDF, etc., daily downscaled possible): air temperature (10 m above ground), rainfall, and possibly wind direction
Satellite-derived discharge data to look at droughts/floods etc. by basin or other scale
Some hydrological model simulation outputs at (sub)basin scale (within reason)
The ARD created in various OGC interoperable formats will provide digestible dataflows for the proposed OGC use cases. This data chain proposed by RSS-Hydro is similar to that of DP21. The climate and hydrological basin model outputs (NetCDF, etc.) and remotely sensed EO data (NASA, NOAA, ESA, etc.), from sources including the Dartmouth Flood Observatory (DFO) and RSS-Hydro, can be simplified to GeoTIFF and/or vectorized GeoPackage per time step using the FME software. Another option for an intermediate data type (IRD) would be Cloud Optimized GeoTIFF (COG), which would make access more efficient. Since COGs are optimized for the cloud, a cloud-based storage bucket could be used to make data sharing more efficient. ARD and IRD should become more service/cloud based wherever possible.
Besides the data format, more thought is needed about the data structures and semantics required to support the desired DRIs. The time-series/raster and classification-to-vector-contour transform is an approach that worked well in DP21 and may be a good starting point here. For example, in the FME processing engine, time series grids can be aggregated across timesteps to mean or maximum values, classified into ranges suitable for decision making, and then written out and exposed as time-tagged vector contour tables.
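The aggregate-then-classify part of that transform can be sketched in a few lines of numpy; the grids, class breaks, and array shapes below are illustrative only, and the final vectorization into contour polygons (done here with FME) is omitted.

```python
import numpy as np

# Sketch of the time-series-to-decision-ranges approach: aggregate a
# (time, y, x) stack of timestep grids to a per-cell maximum, then
# classify the result into ranges suitable for decision making.
stack = np.array([[[10., 20.], [30., 15.]],
                  [[12., 25.], [28., 35.]]])   # (time, y, x), illustrative
agg = stack.max(axis=0)                        # per-cell max over time
breaks = [15.0, 25.0, 30.0]                    # illustrative thresholds
classes = np.digitize(agg, breaks)             # class index 0..3 per cell
print(classes)
```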
In summary, the different ARD and IRD data can be created from the following data sources:
Inputs: EO (US fire-related sources: MODIS, VIIRS); climate projections; sub-catchment polygons; assisting Albert with EO Europe sources: Sentinel-3, Sentinel-5P
Output formats & instances: WCS, GeoTIFF spatial/temporal subset, Shapefile, NetCDF
Output parameters: e.g., hydrological condition of a basin (historical/current), i.e., drought/flood etc.
Output themes: downscaled/subset outputs, hydrologic scenarios
5.1. GMU_CSISS
Component: Analysis Ready Data (ARD).
Inputs: ECV record information, OpenSearch service endpoint (currently CMR(CWIC) and FedEO), download URLs for accessing NetCDF or HDF files.
Outputs: WCS service endpoint for accessing selected granule level product images (GeoTIFF, PNG, JPEG, etc.).
What other component(s) can interact with the component: .
What OGC standards or formats does the component use and produce:
WCS for downloading image
WMS for showing layers on basemap
5.2. Pixalytics
Pixalytics have developed an OGC-compliant Application Programming Interface (API) service, see Figure 23, which will provide global information on droughts. The approach is to take global open data/datasets from organizations such as ESA/Copernicus, NASA/NOAA, and the WMO and combine meteorology, hydrology, and remote sensing data to produce ARD data based on a composite of different indicators. Where globally calculated drought indicators already exist, these are being used in preference to their re-calculation, although consistency and the presence of uncertainties are also being considered.
Figure 23 — Pixalytics drought severity workflow architecture
The Drought Severity Workflow (DSW) is built using individual drought indicators for precipitation (SPI), soil moisture (SMA), and vegetation drought, which are combined using the Combined Drought Indicator (CDI) approach described by [Sepulcre-Canto 2012]. The API access has been set up following the Building Blocks for Climate Services (https://climateintelligence.github.io/smartduck-docs/) approach.
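The CDI combines the individual indicators in a cascading classification. The sketch below (a hypothetical `cdi_class` function with illustrative thresholds) only conveys the shape of that cascade: a precipitation deficit alone gives a watch, propagation into soil moisture a warning, and visible vegetation stress an alert; the exact combination rules are those of [Sepulcre-Canto 2012].

```python
# Loose sketch of the cascading CDI logic, with illustrative thresholds;
# the authoritative rules are in [Sepulcre-Canto 2012].
def cdi_class(spi, sma, fapar_anomaly, t=-1.0):
    if spi >= t:
        return "normal"
    if fapar_anomaly < t:
        return "alert"      # deficit has reached vegetation (FAPAR)
    if sma < t:
        return "warning"    # deficit has propagated into soil moisture
    return "watch"          # precipitation deficit only

print(cdi_class(spi=-1.5, sma=-0.2, fapar_anomaly=0.1))
```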
Component: D100 Drought indicator.
Inputs: Meteorological data, including Precipitation, plus Land Surface Temperature, Soil Moisture, and Vegetation Index (or optical data to calculate it from).
Outputs: Drought Indices — default is CDI — as a time-series dataset output in a choice of downloadable formats: CSV, GeoJSON (default), CoverageJSON, and NetCDF for point data, and COG for areas (to be developed).
What other component(s) can interact with the component: there is a desire to link to visualization/DRI analysis components. A QGIS plugin has been updated to perform a request and view the output JSON file (https://github.com/pixalytics-ltd/qgis-wps-plugin), and the Web Processing Service (WPS) GetCapabilities link is https://api.pixalytics.com/climate/wps?service=WPS&request=GetCapabilities
An example Python query for a location in Canada (latitude 55.5 N, longitude 99.1 W) for the SPI time series; data for these dates and this location are already cached, so the request runs quicker:
from owslib.wps import WebProcessingService, monitorExecution
# Connect to the WPS endpoint (skip fetching capabilities on init)
wps = WebProcessingService("https://api.pixalytics.com/climate/wps", skip_caps=True)
# Fetch the capabilities document
wps.getcapabilities()
# Execute the drought process (longitude is negative as it is west)
inputs = [("start_date", '20200101'),
          ("end_date", '20221231'),
          ("latitude", '55.5'),
          ("longitude", '-99.1')]
execution = wps.execute("drought", inputs, "output")
# Poll the job status until completion, then download the result
outfile = "temp.json"
monitorExecution(execution, download=True, filepath=outfile)
# Show the final status
print('Percent complete {}%, generated {}'.format(execution.percentCompleted, outfile))
# If there are errors, print the error information
for error in execution.errors:
    print("Error: ", error.code, error.locator, error.text)
Figure 24 — Drought indicator calling code that generated the default output, which is the CDI in GeoJSON format
What OGC standards or formats does the component use and produce: data are produced on the fly using the WPS, so input data need to be pulled, preferably through an API route. The speed at which the input data can be made available (i.e., extracting time-series subsets) governs the speed at which the drought indicator provides data. To speed this up, input data that are not changing are cached, so the service runs significantly quicker when the API is called a second time.
Figure 25 shows an example of the output visualized within Python using Streamlit with the intermediate data (cached as NetCDF files) as input.
Figure 25 — Pixalytics output of the CDI for a point location in Canada (Latitude: 55.5 N Longitude: 99.1 W); generated using Copernicus Emergency Management Service information [2023]
5.2.1. Data Sources
The Global Drought Observatory
The Global Drought Observatory (GDO), owned by the Copernicus Emergency Management Services, provides a global map of coarsely-gridded agricultural drought risk, along with a breakdown of the risk for each country. The drought risk is computed using the CDI, with the variables used to compute it and other drought-related variables provided in the user portal for download, but the CDI itself is not available for download and so is being calculated in the DSW.
Figure 26 — Global Drought Observatory Web Portal, https://edo.jrc.ec.europa.eu/gdo/php/index.php?id=2001
We obtain SMA and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) from the GDO data download service. These are provided as netCDF files and contain pre-computed anomalies, so can be assimilated directly into the back-end. The SMA uses a combination of the root soil moisture from the LISFLOOD model, the MODIS land surface temperature and the ESA Climate Change Initiative (CCI) skin soil moisture [Cammalleri 2016], and the FAPAR is from NASA optical imagery.
ERA5 Reanalysis from ECMWF CDS
The CDS portal provides an API interface to return either hourly or monthly averages of the ERA5 variables. Requesting the hourly data is necessary to compute anything that requires a frequency greater than monthly, which is the case for most drought indicators (e.g., SMA), which are in dekads. To avoid aliasing, the full 24-hour dataset for each day of the month must be downloaded. This is very time-consuming, and requests will fail if the number of data points exceeds the limit, which occurs for a period of 2 years or more, even for a single location.
There is a separate application, which can also be accessed via API, to return daily data. The CDS employs a queue management system, which determines the priority of each request based partially on the computational demand of the request. The daily data retrieval relies upon an underlying service to compute the daily statistics from the hourly data, demanding more resources than simply extracting the hourly or monthly data which are pre-computed. This means the request is held in the queue for a long time (up to hours), so there is no time benefit over using the hourly data. However, for a longer time-period which would be rejected if requested hourly, this provides a workaround. A further benefit of requesting daily, rather than hourly, data is that the downloaded file is smaller.
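For reference, a CDS retrieval of the kind described above can be sketched as follows. Only the request dictionary is built here; actually submitting it requires a CDS account and the cdsapi package (`cdsapi.Client().retrieve(dataset, request, "out.nc")`). The field names follow the CDS web forms, and the dataset name, period, and area are illustrative.

```python
# Sketch of a CDS API request for monthly-mean ERA5 total precipitation
# over a small area; assumptions: dataset name, years, and area are
# illustrative, and submission via cdsapi is omitted.
dataset = "reanalysis-era5-single-levels-monthly-means"
request = {
    "product_type": "monthly_averaged_reanalysis",
    "variable": "total_precipitation",
    "year": [str(y) for y in range(2020, 2023)],
    "month": [f"{m:02d}" for m in range(1, 13)],
    "time": "00:00",
    "format": "netcdf",
    # A small area keeps the request size down; long periods at hourly
    # resolution can exceed the CDS request-size limit, as noted above.
    "area": [55.6, -99.2, 55.4, -99.0],  # N, W, S, E
}
```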
We compute SPI and SMA using variables from the CDS API. The SPI is computed from the total precipitation in monthly intervals. The SMA is computed from the soil water volume, which is available for 4 depth levels. The SMA for each depth is computed by calculating the z-score against a long-term mean, using the same baseline time period as the SPI. The most relevant depth layer can then be selected by the user; for instance, a user interested in the health of crops with shallow roots may wish to access the uppermost (surface) layer.
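The z-score anomaly computation described above amounts to standardizing each value against a long-term baseline; a minimal sketch with a hypothetical `anomaly` helper and toy soil-water values:

```python
import numpy as np

# Standardize a soil-water series against a long-term baseline mean and
# standard deviation (the z-score anomaly described in the text).
def anomaly(series, baseline):
    baseline = np.asarray(baseline, dtype=float)
    mu, sigma = baseline.mean(), baseline.std()
    return (np.asarray(series, dtype=float) - mu) / sigma

# A value equal to the baseline mean yields an anomaly of zero.
print(anomaly([0.30], [0.20, 0.25, 0.30, 0.35, 0.40]))
```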
ERA5 Reanalysis from AWS
Input precipitation data was also tested using ERA5 data held within the Registry of Open Data on AWS versus the CDS API. The data stored on the Amazon Web Services (AWS) Simple Storage Service (S3) could be accessed faster once virtual Zarrs had been set up, but there is a question over provenance, as the data on AWS were put there by an organization other than the data originator. In addition, the Zarr approach did not work for more recent years because the S3-stored NetCDFs have inconsistent chunking. An issue was raised for the Python kerchunk library to be able to cope with variable chunking, as this is not currently supported.
NOAA
In Progress
SafeSoftware
In Progress
5.2.2. Further work
The work in this Pilot has focused on building this initial version of the workflow, deploying it via WPS and pulling data from different sources to understand the advantages and disadvantages of the different sources, including straightforwardness and speed of accessibility. For future Pilot activities we plan to continue to build the robustness of the approach, including testing and improving on the robustness of the interfaces to the input data sources and output provided to other Pilot components.
The current work has focused on the extraction and generation of a point time-series, and so there are plans to expand the code to the extraction and generation of a 3D data cube. This might involve changing the output API interface to the OGC Environmental Data Retrieval (EDR) API standard.
5.3. Safe Software
Component:
Climate ARD component — Data Cube to ARD.
Impact Components general I/O (Heat, Drought, Flood).
Inputs:
Climate ARD component — Data Cube to ARD: Climate scenario data from climate services (NetCDF), for historic and future time periods
Impact Components general I/O (Heat, Drought, Flood): Climate impact ARD from Safe’s ARD component, including EO data (MODIS, LANDSAT, SENTINEL products), Population/Infrastructure information (OSM), Basemaps, as well as specific requirements per impact:
Drought: vegetation, soils, hydrology, basins
Flood: DEM, hydrology, basins.
Outputs:
Climate ARD component — Data Cube to ARD: Gridded data, including temperature, soil moisture and precipitation — aggregate grids (GeoTIFF/COG), as well as Vector data, including temperature, soil moisture and precipitation contours (Geopackage, GeoJSON, OGC API Features).
Impact Components general I/O (Heat, Drought, Flood): Risk Contours (Geopackage, GeoJSON, OGC API Features).
What other component(s) can interact with the component: Pixalytics Component: consume variables for Drought Indicator produced by Safe’s ARD component. Any other component that requires climate scenario summary ARD to drive DRI.
What OGC standards or formats does the component use and produce:
OGC API Features
Geopackage
NetCDF
GeoJSON
GeoTIFF/COG
As needed: GML, KML, PostGIS, geodatabase and about 400 other geospatial formats.
Figure 27 — High level FME ARD workflow showing generation of climate scenario ARD and impacts from climate model, EO, IoT, infrastructure and base map inputs
5.3.1. Company Description
Using the FME platform, Safe Software has been a leader in supporting geospatial interoperability for more than 25 years. A central goal has been to promote FAIR principles, including data sharing across barriers and silos, with unparalleled support for a wide array of both vendor specific formats and open standards. Safe Software also provides a range of tools to support interoperability workflows. FME Workbench is a graphical authoring environment that allows users to rapidly prototype transformation workflows in a no-code environment. FME Server then allows users to publish data transforms to enterprise oriented service architectures. FME Cloud offers a low cost, easy to deploy and scalable environment for deploying transformation and integration services to the cloud.
Open standards have always been a core strategy for Safe in order to support data sharing. SAIF (Spatial Archive Interchange Format) — the first format FME was built to support and the basis for the company name — was an open BC government standard that ultimately served as a basis for GML. We have supported open standards such as XML, JSON and OGC standards such as GML, KML, WMS, WFS for many years. Safe has collaborated over the years with the open standards community. For example, we have actively participated in the CityGML and INSPIRE communities in Europe. We have also been active within the OGC community and participated in many OGC initiatives including Maritime Limits and Boundaries, IndoorGML pilots and most recently the 2021 Disaster Pilot. Safe also actively participates in a number of Domain and Standards working groups including CityGML SWG, MUDDI SWG, 3DIM, EDM, Digital Twins, Health DWGs to name a few.
5.3.2. Component Descriptions
D100 — Client instance: Analysis Ready Data Component
Our Analysis Ready Data component (ARD) uses the FME platform to consume regional climate model and EO data and generate FAIR datasets for downstream analysis and decision support.
The challenge of managing and mitigating the effects of climate change poses difficulties for spatial and temporal data integration. One of the biggest gaps to date has been translating the outputs of global climate models into specific impacts at the local level. FME is ideally suited to help explore options for bridging this gap given its ability to read datasets produced by climate models, such as NetCDF or OGC WCS, then aggregate, interpolate, and restructure the data as needed, inter-relate it with higher resolution local data, and finally output it to whatever format or service is most appropriate for a given application domain or user community.
Our ARD component supports the consumption of climate model outputs such as NetCDF, earth observation (EO) data, and the base map datasets necessary for downstream workflows, including derivation of analysis ready datasets for impact analysis. It filters, interrelates, and refines these datasets according to indicator requirements. After extraction, datasets are filtered by location and transformed to an appropriate resolution and CRS. The workflow then classifies, resamples, simplifies, and reprojects the data, and defines feature IDs, metadata, and other properties to satisfy the target ARD requirements. This workflow is somewhat similar to what was needed to evaluate disaster impacts in DP21, although time ranges for climate scenarios are significantly longer: years rather than weeks for floods.
Once the climate model, and other supporting datasets have been adequately extracted, prepared and integrated, the final step is to generate the data streams and datasets required by downstream components and clients. The FME platform is well suited to deliver data in formats as needed. This includes Geopackage format for offline use. For online access, other open standards data streams are available, such as GeoJSON, KML or GML, via WFS and OGC Features APIs and other open APIs.
As our understanding of end user requirements continues to evolve, this will necessitate changes in which data sources are selected and how they are refined, using a model based rapid prototyping approach. We anticipate that any operational system will need to support a growing range of climate change impacts and related domains. Tools and processes must be able to absorb and integrate new datasets into existing workflows with relative ease. As the pilot develops, data volumes will increase, requiring scalability methods to maintain performance and avoid overloading downstream components. Cloud based processing near cloud data sources using cloud native datasets (COG, STAC, etc) supports data scaling. Regarding the FME platform, this involves deployment of FME workflows to FME Cloud.
It is worth underlining that our ARD component depends on the appropriate data sources in order to produce the appropriate decision ready data (DRI) for downstream components. Risk factors include being able to locate and access suitable climate models and EO data of sufficient quality, resolution and timeliness to support indicators as the requirements and business rules associated with them evolve. Any data gaps encountered are documented under the lessons learned section.
Figure 28 — Environment Canada NetCDF GCM time series downscaled to Vancouver area. From: https://climate-change.canada.ca/climate-data/#/downscaled-data
Figure 29 — Data Cube to ARD: NetCDF to KML, Geopackage, GeoTIFF
Data workflow:
Split data cube
Set timestep parameters
Compute timestep stats by band
Compute time range stats by cell
Classify by cell value range
Convert grids to vector contour areas by class
Figure 30 — Extracted timestep grids: Monthly timesteps, period mean T, period max T
Figure 31 — Convert raster temperature grids into temperature contour areas by class
Figure 32 — Geopackage Vector Area Time Series: Max Yearly Temp
5.3.3. D100 — Client Instance: Heat Impact Component
This component takes the climate scenario summary ARD results from the ARD component and analyzes them to derive estimated heat impacts over time, based on selected climate scenarios. Central to this is the identification of key heat impact indicators required by decision makers and the business rules needed to drive them. Process steps include data aggregation and statistical analysis of maximum temperature spikes, taking into account the cumulative impacts of multiple high temperature days. Data segmentation is based on maximum temperature exceeding a certain threshold T for N days in a row. This is because heat exhaustion effects are likely dependent on duration of heat spells, in addition to high maximum temperatures on certain days.
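The "threshold T for N days in a row" segmentation rule can be sketched as a simple run-length check over a daily maximum-temperature series; the function name and thresholds below are illustrative, not part of the FME workflow itself.

```python
import numpy as np

# Sketch of the heat-spell segmentation rule: flag days that fall inside
# a run of at least n consecutive days with max temperature above t.
def heat_spell_days(tmax, t=30.0, n=3):
    hot = np.asarray(tmax) > t
    flagged = np.zeros(len(hot), dtype=bool)
    run = 0
    for i, h in enumerate(hot):
        run = run + 1 if h else 0
        if run >= n:                      # mark the whole qualifying run
            flagged[i - run + 1:i + 1] = True
    return flagged

print(heat_spell_days([28, 31, 32, 33, 29, 31], t=30, n=3))
```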
Figure 33 — ARD Query: Monthly Max Temp Contours
Figure 34 — ARD Query: Max Mean Monthly Temp > 25C
Figure 35 — Town of Lytton, the location where the entire town was devastated by fire during the heat wave of July 2021; the same location is highlighted in the heat risk ARD query in the previous figure
5.3.4. D100 — Client Instance: Flood and Water Resource Impact Component
This component takes the climate scenario summary ARD results from the ARD component and analyzes them to derive estimated flood risk impacts over time, based on selected climate scenarios. Central to this will be the identification of key flood risk impact indicators required by decision makers and the business rules needed to drive them. This process includes data aggregation and statistical analysis of rainfall intensity over time, taking into account the cumulative impacts of multiple consecutive days. This involves, for example, data segmentation based on cumulative rainfall exceeding a certain threshold T within a certain time window (N hours or days), since cumulative rainfall and rainfall intensity over a short period are often more crucial than total rainfall over a longer period. These precipitation scenarios are evaluated by catch basin. This also requires integration with topography, DEMs, and hydrology related data such as river networks, water bodies, aquifers, and watershed boundaries.
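The windowed cumulative-rainfall rule can be sketched as a rolling sum over a daily rainfall series; the helper name and thresholds below are illustrative only.

```python
import numpy as np

# Sketch of the rainfall segmentation rule: flag any n-day window whose
# cumulative rainfall exceeds threshold t (units and values illustrative).
def heavy_rain_windows(rain, t=100.0, n=3):
    rain = np.asarray(rain, dtype=float)
    window_sums = np.convolve(rain, np.ones(n), mode="valid")
    return window_sums > t

print(heavy_rain_windows([10, 60, 50, 5, 0], t=100, n=3))
```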
The FME transformation workflow classifies and segments the time series grid data, followed by vectorization and generalization in order to generate flood contour polygons by time step. The results are loaded to a geopackage which is more readily consumable by a wider variety of GIS applications and analytic tools. We have found that this vectorized data is relatively easy to publish to OGC API Feature Services.
Figure 36 — FME approach for converting flood time series grids to geopackage ARD
Figure 37 — Flood Contour Geopackage ARD, showing flooded areas south of Winnipeg by date and depth, as displayed in FME Data Inspector.
5.3.5. D100 — Client Instance: Drought Impact Component
This component takes the climate scenario summary ARD results from the ARD component and analyzes them to derive estimated drought risk impacts over time based on selected climate scenarios. This involves, for example, data segmentation based on cumulative rainfall below a certain threshold T within a certain time window (days, weeks, or months), since cumulative rainfall over time is crucial for computing water budgets by watershed or catch basin. Besides precipitation, climate models also generate soil moisture predictions, which are used by this component to assess drought risk. This also requires integration with topography, DEMs, and hydrology related data such as river networks, water bodies, aquifers, and watershed boundaries. The specific business rules used to assess drought risk are still under development. FME provides a flexible data and business rule modeling framework. This means that as indicators and drought threshold rules are refined, it is relatively straightforward to adjust the business rules in this component to refine our risk projections. Also, business rule parameters can be externalized as execution parameters so that end users can control key aspects of the scenario drought risk assessment without having to modify the published FME workflow.
5.4. Wuhan University (WHU)-Component
Wuhan University (WHU) plays a significant role in researching and teaching all aspects of surveying and mapping, remote sensing, photogrammetry, and geospatial information sciences in China. In this Climate Resilience Pilot, WHU will contribute three components (ARD, Drought Indicator, and Data Cube) and one use case (Drought Impact).
5.4.1. Component: ARD
Inputs: Gaofen L1A data and Sentinel-2 L1C data
Outputs: Surface Reflectance ARD
What other component(s) can interact with the component: Any components requiring access to surface reflectance data
Surface Reflectance (SR) is the fraction of incoming solar radiation reflected from the Earth’s surface for specific incident and viewing geometries. It can be used to detect the distribution and change of ground objects by leveraging derived spectral, geometric, and textural features. Since a large amount of optical EO data has been released to the public, ARD can facilitate interoperability through time and across multi-source datasets. As probably the most widely applied ARD product type, SR ARD can contribute to climate resilience research. For example, SR-derived NDVI series can be applied to monitor wildfire recovery by analyzing vegetation index increases. Several SR datasets have been assessed as ARD by CEOS, such as Landsat Collection 2 Level 2 and Sentinel-2 L2A, while many other datasets are still provided at a low processing level.
WHU is developing a pre-processing framework for SR ARD generation. The framework supports radiometric calibration, geometric rectification, atmospheric correction, and cloud masking. To address the inconsistencies in observations from different platforms, including variations in band settings and viewing angles, we proposed a processing chain to produce harmonized ARD. This enables us to generate SR ARD with consistent radiometric and geometric characteristics from multi-sensor data, resulting in improved temporal coverage. In the first stage of our mission, we are focusing on the harmonization of Chinese Gaofen data and Sentinel-2 data. As shown in Figure 38, the harmonization involves spatial co-registration, band conversion, and bidirectional reflectance distribution function (BRDF) correction. Figure 39 shows the Sentinel-2 data before and after pre-processing. Furthermore, we plan to seek CEOS-ARD assessment in the long term.
Figure 38 — The processing chain to produce harmonized ARD.
Figure 39 — Sentinel-2 RBG composite (red Band4, green Band3, blue Band2), over Hubei, acquired on October 22, 2020. (a) corresponds to the reflectance at the top of the atmosphere (L1C product); (b) corresponds to the surface reflectance after pre-processing.
5.4.2. Component: Drought Indicator
Inputs: Climate data, including precipitation and temperature
Outputs: Drought risk map derived from drought indicator
What other component(s) can interact with the component: Any components requiring access to drought risk map through OGC API
What OGC standards or formats does the component use and produce: OGC API — Processes
Drought is a disaster whose onset, end, and extent are difficult to detect. Original meteorological data, such as precipitation, can be obtained through satellites and radar and used for drought monitoring. However, the accuracy is easily affected by detection instruments and terrain occlusion, and the ability to retrieve particular precipitation types, such as solid precipitation, is limited. In addition, many meteorological monitoring stations on the ground can provide local raw meteorological observation data. The Standardized Precipitation Evapotranspiration Index (SPEI) is a model to monitor, quantitatively analyze, and determine the spatiotemporal range of the occurrence of drought using meteorological observation data from various regions. It can supplement the results of satellite- and radar-based drought monitoring.
SPEI has two main characteristics: (1) it comprehensively considers the deficit between precipitation and evapotranspiration, that is, the water balance; and (2) it has multi-time-scale characteristics. For (1), drought is caused by insufficient water resources: precipitation adds water while evapotranspiration removes it, so the difference between the two variables over time and space characterizes the water balance. For (2), the deficit of usable water differs at different time scales because different water sources evolve over different cycles, resulting in different temporal representations. By accumulating the difference between precipitation and evapotranspiration at different time scales, SPEI can distinguish agricultural (soil moisture) droughts, hydrological (groundwater, streamflow, and reservoir) droughts, and other droughts.
In our project, the dataset for the SPEI calculation is the ERA5-Land monthly averaged data from 1950 to the present. We selected several years of data covering part of East Asia for the experiments. Through the following flow of the SPEI calculation, we obtain SPEI values for assessing drought impact. The flow of the SPEI calculation is shown in Figure 40.
Figure 40 — Flow of the SPEI calculation.
WHU provides the SPEI drought index calculation service through OGC API — Processes, enabling interaction with other components. The current endpoint for OGC API — Processes is http://oge.whu.edu.cn/ogcapi/processes_api. This section explains how to use this API to calculate the drought index.
Example: /processes
http://oge.whu.edu.cn/ogcapi/processes_api/processes
The API endpoint for retrieving the list of available processes.
Example: /processes/{processId}
http://oge.whu.edu.cn/ogcapi/processes_api/processes/spei
The API endpoint for retrieving a process description (e.g. spei). This returns the description of the "spei" process, which contains its inputs and outputs information.
Example: /processes/{processId}/execution
http://oge.whu.edu.cn/ogcapi/processes_api/processes/spei/execution
The API endpoint for executing the process. The spei process supports asynchronous execution only, so each execution creates a job for processing. The request body is:
{
  "inputs": {
    "startTime": "2010-01-01",
    "endTime": "2020-01-01",
    "timeScale": 5,
    "extent": {
      "bbox": [73.95, 17.95, 135.05, 54.05],
      "crs": "http://www.opengis.net/def/crs/OGC/1.3/CRS84"
    }
  }
}
Example: /processes/{processId}/jobs/{jobId}
http://oge.whu.edu.cn/ogcapi/processes_api/processes/spei/jobs/{jobId}
The API endpoint for retrieving the status of a job.
Example: /processes/{processId}/jobs/{jobId}/results
http://oge.whu.edu.cn/ogcapi/processes_api/processes/spei/jobs/{jobId}/results
The API endpoint for retrieving the results of a job, which are encoded as:

[
  {
    "value": {
      "time": "2000_02_01",
      "url": "http://oge.whu.edu.cn/api/oge-python/data/temp/9BC500C1B0E3438C090AF5C6F8602045/8d0357fb-8ffb-4e62-9c3a-55ad17a5831a/SPEI_2000_02_01.png"
    }
  },
  ...
]
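The asynchronous workflow above (execute, poll the job, fetch results) can be sketched as follows. This is an illustrative sketch only: the helper names are our own, and only the request body and URL patterns documented in this section are taken from the service description; no network call is made here.

```python
import json

BASE = "http://oge.whu.edu.cn/ogcapi/processes_api"

def execution_request(start, end, time_scale, bbox):
    """Request body for POST {BASE}/processes/spei/execution."""
    return {
        "inputs": {
            "startTime": start,
            "endTime": end,
            "timeScale": time_scale,
            "extent": {
                "bbox": bbox,
                "crs": "http://www.opengis.net/def/crs/OGC/1.3/CRS84",
            },
        }
    }

def job_url(job_id):
    """URL for polling the status of a job created by the execution call."""
    return f"{BASE}/processes/spei/jobs/{job_id}"

def results_url(job_id):
    """URL for retrieving the results once the job has finished."""
    return job_url(job_id) + "/results"

body = execution_request("2010-01-01", "2020-01-01", 5,
                         [73.95, 17.95, 135.05, 54.05])
print(json.dumps(body, indent=2))
print(results_url("example-job-id"))
```

A client would POST the body to the execution endpoint, poll `job_url(...)` until the job reports completion, and then GET `results_url(...)` to obtain the list of result entries shown above.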
Figure 41 — The SPEI results for the date 2000_02_01.
5.4.3. Component: Data Cube
Inputs: ERA5 temperature and precipitation data
Outputs: Results in the form of GeoTIFF after processing in Data Cubes
What other component(s) can interact with the component: Any components requiring access to temperature and precipitation data in part of Asia through OGC API
What OGC standards or formats does the component use and produce: OGC API — Coverages
WHU has introduced GeoCube as a cube infrastructure for the management and large-scale analysis of multi-source data. GeoCube leverages the latest generation of OGC standard service interfaces, including OGC API — Coverages, OGC API — Features, and OGC API — Processes, to offer services encompassing data discovery, access, and processing of diverse data sources. The UML model of the GeoCube is given in Figure 42, and it has four dimensions: product, spatial, temporal, and band. The product dimension specifies the thematic axis for the geospatial data cube using the product name (e.g. ERA5_Precipitation or OSM_Water), type (e.g. raster, vector, or tabular), processes, and instrument name. For example, the product dimension can describe optical image products by recording information on the instrument and bands. The spatial dimension specifies the spatial axis for the geospatial data cube using the grid code, grid type, city name, and province name. The cube uses a spatial grid for tiling so that data are ready in a high-performance form. The temporal dimension specifies the temporal axis for the geospatial data using the phenomenon time and result time. The band dimension describes the band attributes of the raster products according to the band name, the polarization mode (reserved for SAR images), and the product-level band. A product-level band holds information extracted from the original bands. For example, the Standardized Precipitation Evapotranspiration Index (SPEI) band is a product-level band that takes the hydrological process into account and evaluates the degree of drought by calculating the balance of precipitation and evaporation.
Figure 42 — The UML model of WHU Data Cube.
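The four dimensions of the cube model can be pictured as plain records. The sketch below is illustrative only; the field names are our own assumptions for exposition and do not mirror WHU's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ProductDim:
    name: str            # e.g. "ERA5_Precipitation" or "OSM_Water"
    type: str            # "raster", "vector", or "tabular"
    instrument: str = ""

@dataclass
class SpatialDim:
    grid_code: str       # tile identifier in the spatial grid
    grid_type: str
    city: str = ""
    province: str = ""

@dataclass
class TemporalDim:
    phenomenon_time: str
    result_time: str

@dataclass
class BandDim:
    name: str                  # e.g. "SPEI" as a product-level band
    polarization: str = ""     # reserved for SAR images
    product_level: bool = False

# A cube cell is addressed by one coordinate along each of the four axes.
cell = (ProductDim("ERA5_Precipitation", "raster"),
        SpatialDim("G12345", "degree-grid"),
        TemporalDim("2016-02-01", "2016-02-01"),
        BandDim("SPEI", product_level=True))
print(cell[0].name, cell[3].name)
```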
WHU has organized ERA5 temperature and precipitation data into a cube and offers climate data services through the OGC API — Coverages, supporting the computation of various climate indices. The API endpoint is http://oge.whu.edu.cn/ogcapi/coverages_api, allowing users to query and retrieve the desired data from the cube. This section provides examples demonstrating how to access the data from the cube using OGC API — Coverages.
Example: /collections
http://oge.whu.edu.cn/ogcapi/coverages_api/collections?bbox=112.65942,29.23223,115.06959,31.36234&limit=10&datetime=2016-01-01T02:55:50Z/2018-01-01T02:55:50Z
The API endpoint for querying datasets from the cube, with query parameters including limit, bbox, and time.
Example: /collections/{collectionId}
http://oge.whu.edu.cn/ogcapi/coverages_api/collections/2m_temperature_201602
The API endpoint for retrieving the description of the coverage with the specified ID from the cube.
Example: /collections/{collectionId}/coverage
http://oge.whu.edu.cn/ogcapi/coverages_api/collections/2m_temperature_201602/coverage
The API endpoint for retrieving the coverage in GeoTIFF format for the specified ID. Here is an example of the response:
Figure 43 — The coverage with the ID "2m_temperature_201602" in the Asian region.
Example: /collections/{collectionId}/coverage/rangetype
http://oge.whu.edu.cn/ogcapi/coverages_api/collections/2m_temperature_201602/coverage/rangetype
The API endpoint for accessing the range type of the coverage, which is part of the band dimension members in the cube. In this example, the coverage consists of only one band dimension member.
Example: /collections/{collectionId}/coverage/domainset
http://oge.whu.edu.cn/ogcapi/coverages_api/collections/2m_temperature_201602/coverage/domainset
The API endpoint for the domain set of the coverage, which is also the domain set of the cube.
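The Coverages requests above can be assembled programmatically. The following is a minimal sketch: the helper names are our own, and the parameter names (bbox, limit, datetime) follow common OGC API usage and the examples in this section, so treat them as assumptions rather than a normative list.

```python
from urllib.parse import urlencode

BASE = "http://oge.whu.edu.cn/ogcapi/coverages_api"

def collections_query(bbox, limit, interval):
    """Query URL for /collections with bbox, limit, and time-interval filters."""
    params = {
        "bbox": ",".join(str(v) for v in bbox),
        "limit": limit,
        "datetime": interval,
    }
    return f"{BASE}/collections?{urlencode(params)}"

def coverage_url(collection_id, subresource=""):
    """URL for a coverage, or one of its subresources (rangetype, domainset)."""
    url = f"{BASE}/collections/{collection_id}/coverage"
    return url + ("/" + subresource if subresource else "")

print(collections_query([112.65942, 29.23223, 115.06959, 31.36234],
                        10, "2016-01-01T02:55:50Z/2018-01-01T02:55:50Z"))
print(coverage_url("2m_temperature_201602"))
print(coverage_url("2m_temperature_201602", "rangetype"))
```

Note that `urlencode` percent-encodes the commas and colons in the bbox and interval values, which servers accept interchangeably with the literal characters.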
6. ARD to Decision Ready Indicator (DRI)
6.1. Intact Financial Corporation
Component:
Wildfire hazard map
Inputs:
National fire database polygon data
Fire Weather Index (FWI) daily maps
Land cover maps
Drought conditions
Digital Elevation Model (DEM)
Population density
Outputs:
Wildfire hazard map, in GeoTIFF
Regional risk indicators, as tabular output in CSV format
What other component(s) can interact with the component: Intact’s component is developed for internal use. It is deployed in highly secured data environments, and as such it cannot readily interact with other components of the pilot.
What OGC standards or formats does the component use and produce:
GeoTIFF
NetCDF
WPS
6.1.1. Company Description
Intact is the largest provider of Property & Casualty insurance in Canada. Its purpose is to help people, businesses, and society prosper in good times and be resilient in bad times. The company has been on the front lines of climate change with its customers for more than a decade, getting them back on track and helping them adapt. As extreme weather worsens over the next decade, Intact intends to double down on adapting to this changing environment and to be better prepared for floods, wildfires, and extreme heat.
6.2. Pelagis
The effects of climate change on coastal environments cannot be overstated. As the carrying capacity of our oceans as a carbon sink reaches its limits, research suggests that an integrated approach to ocean resource management can sustainably meet the needs of global food supplies, offset the rate of ocean acidification, and permanently remove CO2 from the atmosphere. However, accurately measuring both the effects of climate change and the mitigation effects of nature-based approaches remains a challenge.
Our participation in the Climate Resilience pilot focuses on enhancing our view of a global oceans observation system by combining real-world ground observations with analysis ready datasets. Monitoring aspects of our oceans through both a temporal and spatial continuum while providing traceability through the observations process allows stakeholders to better understand the stressors affecting the health of our oceans and investigate opportunities to mitigate the longer term implications related to climate change.
6.2.1. About Us
Pelagis is an OceanTech venture located in Nova Scotia, Canada. Our foundation focuses on the application of open geospatial technology and standards designed to promote the sustainable use of our ocean resources. As a member of the Open Geospatial Consortium, we co-chair the Marine Domain Working Group with a focus on developing a federated model of the marine ecosystem (MSDI) compliant with the suite of OGC specifications and standards.
6.2.2. Scope of Work
The project effort centers on three key challenges:

the ability to collect data relevant to climate resilience;

the ability to apply the data in a coherent and standardized manner from which to draw context; and

the ability to impart insight to community members and stakeholders so as to identify, anticipate, and mitigate the effects of climate change across both local and international boundaries.
Each of these activities aligns with the best practices and standards of the OGC and is used as input to the MarineDWG MSDI reference model.
6.2.3. Approach
Our approach to address the needs for the shared use of our ocean resources is to make Marine Spatial Planning a core foundation on which to build out vertical applications. Our platform is based on a federated information model represented as a unified social graph. This provides a decentralized approach towards designing various data streams each represented by their well-known and/or standardized model. To date, service layers based on the OGC standards for Feature, Observations & Measurements, and Sensors APIs have been developed and extended for adoption within the marine domain model. Previous work provides for data discovery and processing of features based on the IHO S-100 standard (Marine Protected Areas, Marine Traffic Management, …); NOAA open data pipelines for major weather events (Hurricane Tracking, Ocean Drifters, Saildrones …); as well as connected observation systems as provided by IOOS and its Canadian variant, CIOOS.
Figure 44 — Architecture
6.3. ECMWF — Copernicus (will be integrated with INTRODUCTION section)
Component: Copernicus services.
Outputs: Copernicus Services, including Climate Data Store (CDS) https://cds.climate.copernicus.eu/ and Atmosphere Data Store (ADS) https://ads.atmosphere.copernicus.eu/.
What other component(s) can interact with the component: The CDS and ADS provide access to data via different interfaces: a web UI and an API. They also offer a toolbox with a set of expert libraries to perform advanced operations on the available data. The CDS and ADS catalogue metadata is also accessible via a standard CSW endpoint: https://cds.climate.copernicus.eu/geonetwork/srv/eng/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities
What OGC standards or formats does the component use and produce:
CDS and ADS catalogues exposed via CSW.
Access to ESGF datasets via WPS.
WMS is offered in some published applications.
CADS 2.0 (under construction) will implement OGC APIs.
7. From Data to Visualization
7.1. 5D Meta World
Presagis offered the V5D rapid 3D (trial) Digital Twin generation capability to Laubwerk. Presagis gathered open-source GIS datasets for the Hollywood region in order to match the location of the tree dataset from Laubwerk. Using V5D, Presagis created a representative 3D digital twin of the buildings and terrain. Presagis imported the Laubwerk tree point dataset, which provides vegetation type information, into V5D. Presagis provided the V5D Unreal plugin to Laubwerk in order to allow the insertion of the Laubwerk 3D trees (as Unreal assets) into the scene. Using V5D, Laubwerk is able to adapt the tree models in order to demonstrate the impact of climate change on the city's vegetation.
Presagis also provided to Laubwerk its V5D AI extracted vegetation dataset in order to complement the existing tree dataset as needed.
Figure 45
7.2. Visualization Climate Impact on Vegetation
One of the biggest challenges in communicating climate change is to tie global changes to the local impact they will have. Photorealistic visualization is a critical component for assessing and communicating the impact of environmental changes, and possibilities for mitigation. For this to work, it is crucial for visualizations to reflect the underlying data accurately and allow for quick iteration. In this regard, manual visualization processes are inferior. As much as possible, visualizations of real-life scenarios should be driven directly by available data of present states and simulations of possible scenarios. Our contribution is a first attempt at doing just that, determining what already works and what doesn’t with existing data and technology.
As our contribution to the Climate Resilience Pilot, we explored such data-driven, high-quality visualizations, focusing on the impact on vegetation. Because this is a pilot, we constrained ourselves in terms of coverage area, to account for limited time and to cope with potentially limited data availability. This ensured that we were able to make the full connection from input data to final visualization, drawing valuable conclusions for broader application in the future. This size limitation allows us to produce meaningful results even if data transfer and processing are slow, or if data must be processed in manual or half-automated ways due to inconsistent formatting. It also lets us visualize a high level of detail without having to account too much for the sheer amount of data we could face with very large areas.
We selected a relatively small section of Los Angeles for actual visualization. The rationale behind this choice of location had several components:
The area will (and already does) see considerable direct impact of climate change through heat, drought, wildfires, etc.
It contains different areas of land use (from deeply urban and sub-urban to unmanaged areas).
Since it is part of a major metro area, the results will be relevant to a large population base.
Some known mitigation measures that can be considered for visualization are in place.
Other known external (non-climate-change) influences on vegetation (such as pests, irrigation limitations, and known life spans of relevant plant species) are in play and could be considered.
7.2.1. Source Data
Our visualization ties data that is very global together with data that is hyper-local. That means we need to draw on data from a wide variety of sources that are not usually combined. Examples of data sources used for our visualization are:
Satellite Imagery
Building Footprints and Heights
Plant Inventory from Bureau of Street Services and Department of Recreation and Parks
Results from climate models, ideally in an analysis-ready format
3D Plant Models from the Laubwerk database
Plant metadata to judge climate change impact on specific species through given environmental factors, also from the Laubwerk database
Information on local mitigation measures from various sources
7.2.2. Results
The aforementioned data sources were combined to create a detailed visualization of the area in question. The images below show a visualization of the status quo.
Figure 46 — Overview of the Visualized Region
Figure 47 — Above the Corner Sunset Blvd and N Curson Ave Looking North-East